-
Nitrous oxide (N2O) emissions from agriculture are rising due to increased fertilizer use and intensive farming, posing a major challenge for climate mitigation. This study introduces a novel reinforcement learning (RL) framework to optimize farm management strategies that balance crop productivity with environmental impact, particularly N2O emissions. By modeling agricultural decision-making as a partially observable Markov decision process (POMDP), the framework accounts for uncertainties in environmental conditions and observational data. The approach integrates deep Q-learning with recurrent neural networks (RNNs) to train adaptive agents within a simulated farming environment. A Probabilistic Deep Learning (PDL) model was developed to estimate N2O emissions, achieving a high Prediction Interval Coverage Probability (PICP) of 0.937 within a 95% confidence interval on the available dataset. While the PDL model’s generalizability is currently constrained by the limited observational data, the RL framework itself is designed for broad applicability, capable of extending to diverse agricultural practices and environmental conditions. Results demonstrate that RL agents reduce N2O emissions without compromising yields, even under climatic variability. The framework’s flexibility allows for future integration of expanded datasets or alternative emission models, ensuring scalability as more field data becomes available. This work highlights the potential of artificial intelligence to advance climate-smart agriculture by simultaneously addressing productivity and sustainability goals in dynamic real-world settings.
Free, publicly-accessible full text available August 1, 2026
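The abstract alone does not allow reproducing the paper's POMDP/RNN pipeline, but the underlying value-learning idea can be sketched. The toy below is entirely hypothetical: the nitrogen bins, dose actions, dynamics, and reward weights are invented for illustration, and plain tabular Q-learning stands in for the paper's deep recurrent agent.

```python
import numpy as np

# Hypothetical toy (NOT the paper's framework): tabular Q-learning on an
# invented fertilizer-management MDP.  States are coarse soil-nitrogen
# bins, actions are fertilizer doses, and the reward trades a yield proxy
# against an N2O-emission penalty that grows with the dose.
rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 3      # nitrogen bins; dose: none / low / high
ALPHA, GAMMA = 0.1, 0.95        # learning rate, discount factor

def step(state, action):
    """Toy dynamics: fertilizing raises soil nitrogen, crops draw it down."""
    next_state = int(np.clip(state + action - 1, 0, N_STATES - 1))
    yield_proxy = 3.0 * next_state          # more nitrogen -> more growth
    emission_penalty = 2.0 * action ** 2    # high doses emit disproportionately
    return next_state, yield_proxy - emission_penalty

# Off-policy learning: behave uniformly at random so every state-action
# pair keeps being visited, while the update still targets the greedy value.
Q = np.zeros((N_STATES, N_ACTIONS))
state = 2
for _ in range(20_000):
    action = int(rng.integers(N_ACTIONS))
    next_state, reward = step(state, action)
    # Standard Q-learning temporal-difference update.
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

greedy_policy = Q.argmax(axis=1)
print(greedy_policy)  # learned dose per nitrogen bin
```

With these invented rewards the agent learns to fertilize up to the highest nitrogen bin and then hold it with the low dose, illustrating the yield-versus-emission trade-off the abstract describes.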
-
Diffusion models have recently been successfully applied to a wide range of robotics applications for learning complex multi-modal behaviors from data. However, prior works have mostly been confined to single-robot and small-scale environments due to the high sample complexity of learning multi-robot diffusion models. In this paper, we propose a method for generating collision-free multi-robot trajectories that conform to underlying data distributions while using only single-robot data. Our algorithm, Multi-robot Multi-model planning Diffusion (MMD), does so by combining learned diffusion models with classical search-based techniques – generating data-driven motions under collision constraints. Scaling further, we show how to compose multiple diffusion models to plan in large environments where a single diffusion model fails to generalize well. We demonstrate the effectiveness of our approach in planning for dozens of robots in a variety of simulated scenarios motivated by logistics environments. View video demonstrations in our supplementary material, and our code at: github.com/yoraish/mmd.
Free, publicly-accessible full text available April 24, 2026
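MMD itself couples learned diffusion models with search and cannot be reproduced from the abstract; as a loose, hypothetical illustration of planning under inter-robot collision constraints only, the sketch below uses prioritized rejection sampling, with a random via-point generator standing in for a learned single-robot sampler.

```python
import numpy as np

# Hypothetical toy (NOT the MMD algorithm): prioritized planning by
# rejection sampling.  A random via-point path generator stands in for a
# learned single-robot sampler; "search" is reduced to accepting the
# first candidate that clears all previously planned robots.
rng = np.random.default_rng(7)
HORIZON, SAFE_DIST, N_CANDIDATES = 20, 0.5, 200

def sample_trajectory(start, goal):
    """Stand-in for a learned sampler: detour through a random via-point."""
    via = rng.uniform(0.0, 4.0, size=2)
    first = np.linspace(start, via, HORIZON // 2)
    second = np.linspace(via, goal, HORIZON // 2 + 1)[1:]
    return np.vstack([first, second])       # (HORIZON, 2) waypoints

def collides(traj, planned):
    """True if traj comes within SAFE_DIST of an already-planned robot
    at the same timestep (robots are assumed time-synchronized)."""
    return any((np.linalg.norm(traj - other, axis=1) < SAFE_DIST).any()
               for other in planned)

def plan_all(starts, goals):
    """Plan robots in priority order; each must avoid all earlier ones."""
    planned = []
    for start, goal in zip(starts, goals):
        for _ in range(N_CANDIDATES):
            traj = sample_trajectory(start, goal)
            if not collides(traj, planned):
                planned.append(traj)
                break
        else:
            raise RuntimeError("no collision-free candidate found")
    return planned

# Two robots crossing a 4x4 workspace on opposite diagonals.
starts = np.array([[0.0, 0.0], [4.0, 0.0]])
goals = np.array([[4.0, 4.0], [0.0, 4.0]])
paths = plan_all(starts, goals)
print(len(paths))
```

Straight-line paths for these two robots would meet near the center at the same time, so the second robot's accepted candidate necessarily detours - the same constraint-satisfaction role that search plays alongside the diffusion models in MMD.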
-
We study the uniform-in-time asymptotic behavior of a nonlinear dynamical system under the combined effects of fast periodic sampling with period $\delta > 0$ and small white noise of size $\varepsilon > 0$. The dynamics depend on both the current state and its recent sampled measurements, and the process is therefore not Markovian. Our main results can be interpreted as Law of Large Numbers (LLN) and Central Limit Theorem (CLT) type results. The LLN-type result shows that the resulting stochastic process is close to an ordinary differential equation (ODE), uniformly in time, as $\delta, \varepsilon \to 0$. For the CLT, we provide quantitative and uniform-in-time control of the fluctuation process. The interaction of the small parameters produces an additional drift term in the limiting fluctuations, which captures both the sampling and the noise effects. As a consequence, we obtain a first-order perturbation expansion of the stochastic process along with time-independent estimates on the remainder. The zeroth- and first-order terms in the expansion are given by an ODE and an SDE, respectively. Simulation studies that illustrate and supplement the theoretical results are also provided.
Free, publicly-accessible full text available February 1, 2026
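In schematic form (the abstract's formulas were lost in extraction, so the symbols are assumptions: $\delta$ for the sampling period, $\varepsilon$ for the noise size, and a $\sqrt{\varepsilon}$ fluctuation scaling), the expansion described above can be written as:

```latex
% Schematic only: X^{\delta,\varepsilon} is the sampled, noisy state,
% \bar{X} the zeroth-order LLN limit (an ODE), Z the first-order
% fluctuation limit (an SDE), and R a remainder bounded uniformly in time.
\[
  X_t^{\delta,\varepsilon}
    = \bar{X}_t + \sqrt{\varepsilon}\, Z_t + R_t^{\delta,\varepsilon},
  \qquad
  \sup_{t \ge 0} \mathbb{E}\bigl[\lvert R_t^{\delta,\varepsilon} \rvert^2\bigr]
    \longrightarrow 0
  \quad \text{as } \delta, \varepsilon \to 0.
\]
```

Under this reading, the sampling-noise interaction mentioned in the abstract would appear as an extra drift term in the SDE satisfied by $Z$.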